Understanding the abundance and distribution of fish within tidal energy streams is important for assessing the risks presented by the introduction of tidal energy devices into the habitat. However, tidal current flows suitable for tidal energy are often highly turbulent, which complicates the interpretation of echosounder data. The portions of the water column contaminated by returns from entrained air must be excluded from the data used for biological analyses. Applying a single conventional algorithm to identify the depth of entrained air is insufficient for a boundary that is discontinuous, depth-dynamic, porous, and varies with tidal flow speed. Using a case study at a tidal energy demonstration site in the Bay of Fundy, we describe the development and application of a deep machine learning model with a U-Net based architecture. Our model, Echofilter, was highly responsive to the dynamic range of turbulence conditions and sensitive to subtle nuances in the boundary position, producing an entrained-air boundary line with a mean error of 0.33 m on mobile downfacing recordings and 0.5-1.0 m on stationary upfacing data, less than half that of existing algorithmic solutions. The model's overall annotations were in close agreement with the human segmentation, with an intersection-over-union score of 99% for mobile downfacing recordings and 92-95% for stationary upfacing recordings. The time required to manually edit the model's line placement was reduced by 50% compared with the time required to manually edit the line placement produced by currently available algorithms. Because of the improved initial automated placement, implementing the model promises to increase the standardization and repeatability of line placement.
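To make the "U-Net based architecture" concrete, below is a minimal single-level encoder-decoder sketch in PyTorch. It is not Echofilter's actual network; the channel counts, depth, input shape, and two-class output head are assumptions chosen purely for illustration of the skip-connection structure.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net design.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, and decoder with a skip connection."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc = conv_block(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                         # skip-connection features
        b = self.bottleneck(self.pool(e))       # coarser context
        d = self.up(b)                          # upsample back to input resolution
        d = self.dec(torch.cat([d, e], dim=1))  # fuse skip and upsampled features
        return self.head(d)                     # per-pixel class logits

# Example: a batch of single-channel echogram patches (depth x time).
logits = TinyUNet()(torch.randn(4, 1, 64, 64))  # -> shape (4, 2, 64, 64)
```

A full implementation would stack several such levels and add the training pipeline described in the paper.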
We seek to improve the pooling operations in neural networks by applying a more theoretically justified operator. We demonstrate that LogSumExp provides a natural OR operator for logits. When one corrects for the number of elements inside the pooling operator, this becomes $\text{LogAvgExp} := \log(\text{mean}(\exp(x)))$. By introducing a single temperature parameter, LogAvgExp smoothly transitions from the max of its operands to the mean (found in the limiting cases $t \to 0^+$ and $t \to +\infty$). We experimentally tested LogAvgExp, both with and without a learnable temperature parameter, in a variety of deep neural network architectures for computer vision.
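A minimal sketch of the LogAvgExp operator with a temperature parameter, following the formula stated in the abstract (PyTorch, the function name, and the default arguments are assumptions for illustration):

```python
import math
import torch

def logavgexp(x, dim=-1, temperature=1.0):
    # t * log(mean(exp(x / t))): interpolates between max(x) (t -> 0+)
    # and mean(x) (t -> +inf) along the pooled dimension.
    return temperature * (
        torch.logsumexp(x / temperature, dim=dim) - math.log(x.shape[dim])
    )

x = torch.tensor([0.0, 1.0, 5.0])
print(logavgexp(x, temperature=1.0))    # between the mean and the max
print(logavgexp(x, temperature=0.01))   # ~5.0, approaches the max
print(logavgexp(x, temperature=100.0))  # ~2.0, approaches the mean
```

Using `torch.logsumexp` keeps the computation numerically stable even when `x / temperature` is large.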
The choice of activation functions and their motivation is a long-standing issue within the neural network community. Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds score of the presence of features within the stimulus. We derive logit-space operators equivalent to probabilistic Boolean logic-gates AND, OR, and XNOR for independent probabilities. Such theories are important to formalize more complex dendritic operations in real neurons, and these operations can be used as activation functions within a neural network, introducing probabilistic Boolean-logic as the core operation of the neural network. Since these functions involve taking multiple exponents and logarithms, they are computationally expensive and not well suited to be directly used within neural networks. Consequently, we construct efficient approximations named $\text{AND}_\text{AIL}$ (the AND operator Approximate for Independent Logits), $\text{OR}_\text{AIL}$, and $\text{XNOR}_\text{AIL}$, which utilize only comparison and addition operations, have well-behaved gradients, and can be deployed as activation functions in neural networks. Like MaxOut, $\text{AND}_\text{AIL}$ and $\text{OR}_\text{AIL}$ are generalizations of ReLU to two dimensions. While our primary aim is to formalize dendritic computations within a logit-space probabilistic-Boolean framework, we deploy these new activation functions, both in isolation and in conjunction, to demonstrate their effectiveness on a variety of tasks including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning.
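For reference, the exact logit-space gates that the AIL activations approximate follow directly from the independence assumption: with $p_i = \sigma(x_i)$, the AND gate outputs $\operatorname{logit}(p_1 p_2)$, and OR and XNOR follow the same pattern. Below is a sketch of these exact (expensive) operators; the efficient $\text{AND}_\text{AIL}$, $\text{OR}_\text{AIL}$, and $\text{XNOR}_\text{AIL}$ approximations, which use only comparisons and additions, are defined in the paper and are not reproduced here.

```python
import torch

def logit(p, eps=1e-6):
    # Inverse sigmoid, clamped for numerical safety.
    p = p.clamp(eps, 1 - eps)
    return torch.log(p) - torch.log1p(-p)

def and_exact(x1, x2):
    # Logit of P(A and B) = p1 * p2 under independence.
    return logit(torch.sigmoid(x1) * torch.sigmoid(x2))

def or_exact(x1, x2):
    # Logit of P(A or B) = 1 - (1 - p1) * (1 - p2).
    return logit(1 - (1 - torch.sigmoid(x1)) * (1 - torch.sigmoid(x2)))

def xnor_exact(x1, x2):
    # Logit of P(A == B) = p1 * p2 + (1 - p1) * (1 - p2).
    p1, p2 = torch.sigmoid(x1), torch.sigmoid(x2)
    return logit(p1 * p2 + (1 - p1) * (1 - p2))
```

The repeated sigmoids and logarithms here are exactly the cost the AIL approximations are designed to avoid.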
Recent research in clustering face embeddings has found that unsupervised, shallow, heuristic-based methods -- including $k$-means and hierarchical agglomerative clustering -- underperform supervised, deep, inductive methods. While the reported improvements are indeed impressive, experiments are mostly limited to face datasets, where the clustered embeddings are highly discriminative or well-separated by class (Recall@1 above 90% and often nearing ceiling), and the experimental methodology seemingly favors the deep methods. We conduct a large-scale empirical study of 17 clustering methods across three datasets and obtain several robust findings. Notably, deep methods are surprisingly fragile for embeddings with more uncertainty, where they match or even perform worse than shallow, heuristic-based methods. When embeddings are highly discriminative, deep methods do outperform the baselines, consistent with past results, but the margin between methods is much smaller than previously reported. We believe our benchmarks broaden the scope of supervised clustering methods beyond the face domain and can serve as a foundation on which these methods could be improved. To enable reproducibility, we include all necessary details in the appendices, and plan to release the code.
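As a point of reference for the shallow, heuristic-based baselines named above, the following sketch runs $k$-means and hierarchical agglomerative clustering on placeholder embeddings with scikit-learn. The random data, embedding dimensionality, and choice of adjusted mutual information as the score are illustrative assumptions, not the paper's benchmark protocol.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_mutual_info_score

# Hypothetical embeddings and labels standing in for a real embedding dataset.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 128))
labels = rng.integers(0, 10, size=1000)

n_clusters = len(np.unique(labels))
for name, algo in [
    ("k-means", KMeans(n_clusters=n_clusters, n_init=10, random_state=0)),
    ("agglomerative", AgglomerativeClustering(n_clusters=n_clusters)),
]:
    pred = algo.fit_predict(embeddings)
    print(name, adjusted_mutual_info_score(labels, pred))
```

Deep, supervised clustering methods would instead be trained on labeled clusterings and applied inductively to new embeddings.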
Text-based games present a unique class of sequential decision making problems in which agents interact with a partially observable, simulated environment via actions and observations conveyed through natural language. Such observations typically include instructions that, in a reinforcement learning (RL) setting, can directly or indirectly guide a player towards completing reward-worthy tasks. In this work, we study the ability of RL agents to follow such instructions. We conduct experiments that show that the performance of state-of-the-art text-based game agents is largely unaffected by the presence or absence of such instructions, and that these agents are typically unable to execute tasks to completion. To further study and address the task of instruction following, we equip RL agents with an internal structured representation of natural language instructions in the form of Linear Temporal Logic (LTL), a formal language that is increasingly used for temporally extended reward specification in RL. Our framework both supports and highlights the benefit of understanding the temporal semantics of instructions and of measuring progress towards the achievement of such temporally extended behaviour. Experiments with 500+ games in TextWorld demonstrate the superior performance of our approach.
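As a purely illustrative example of the kind of structured representation involved (not an actual TextWorld instruction encoding), an instruction such as "take the coin, then use it to open the door" could be expressed as an LTL formula over propositions that the game state makes true:

```latex
% Illustrative only: "eventually hold the coin, and after that eventually open the door"
\varphi \;=\; \Diamond\bigl(\mathit{have\_coin} \;\wedge\; \Diamond\,\mathit{door\_open}\bigr)
```

Progress towards satisfying such a formula can then be tracked as the agent acts, which is what makes temporally extended instructions measurable.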
Autoencoding is a popular method for representation learning. Conventional autoencoders employ a symmetric encoding-decoding procedure and a simple Euclidean latent space to detect hidden low-dimensional structure in an unsupervised manner. This work introduces a chart autoencoder with an asymmetric encoding-decoding process, which can incorporate additional semi-supervised information such as class labels. Besides enhancing the capability of handling data with complicated topological and geometric structure, these models can successfully distinguish nearby but disjoint manifolds and intersecting manifolds with only a small amount of supervision. Moreover, the model only requires a low-complexity encoder, such as a local linear projection. We discuss the theoretical approximation power of such networks, which essentially depends on the intrinsic dimension of the data manifold rather than the dimension of the observations. Our numerical experiments on synthetic and real-world data verify that the proposed model can effectively handle data with multi-class nearby but disjoint manifolds, overlapping manifolds, and manifolds with non-trivial topology.
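To illustrate the asymmetry described above, here is a minimal sketch of an autoencoder pairing a low-complexity linear encoder with a more expressive decoder. It is not the chart autoencoder itself, which uses multiple local charts and can incorporate semi-supervised label information; the dimensions and layer sizes are assumptions.

```python
import torch.nn as nn

class AsymmetricAutoencoder(nn.Module):
    """Linear (low-complexity) encoder paired with a nonlinear decoder."""
    def __init__(self, ambient_dim=100, latent_dim=2):
        super().__init__()
        self.encoder = nn.Linear(ambient_dim, latent_dim)  # local linear projection
        self.decoder = nn.Sequential(                       # expressive map back
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, ambient_dim),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent chart coordinates
        return self.decoder(z), z     # reconstruction and latent code
```

A chart autoencoder would maintain several such encoder-decoder pairs, one per chart, together with a mechanism for assigning points to charts.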
We prove finite-sample complexity bounds for sequential Monte Carlo (SMC) algorithms which require only local mixing times of the associated Markov kernels. Our bounds are particularly useful when the target distribution is multimodal and global mixing of the Markov kernel is slow; in such cases, our approach establishes the benefits of SMC over the corresponding Markov chain Monte Carlo (MCMC) estimator. The lack of global mixing is addressed by sequentially controlling the bias introduced by the SMC resampling procedure. We apply these results to obtain complexity bounds for approximating expectations under mixtures of log-concave distributions, and show that SMC provides a fully polynomial-time randomized approximation scheme for some difficult multimodal problems where the corresponding Markov chain sampler mixes exponentially slowly. Finally, we compare the bounds obtained by our approach to existing bounds for tempered Markov chains on the same problems.
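For readers unfamiliar with the algorithm being analyzed, the sketch below is a generic tempered SMC sampler for a bimodal one-dimensional target. It shows the resampling step whose bias the analysis controls and the local Markov moves that only need to mix within modes; the prior, temperature schedule, and proposal scale are all illustrative assumptions, not the specific setup studied in the paper.

```python
import numpy as np

def smc_tempered(log_target, n_particles=2000, n_steps=20, rng=None):
    """Minimal tempered SMC: move particles from a N(0, 4^2) prior towards the
    target through a geometric bridge, resampling at every temperature step."""
    rng = np.random.default_rng() if rng is None else rng
    log_prior = lambda x: -0.5 * (x / 4.0) ** 2          # unnormalized prior density
    x = rng.normal(0.0, 4.0, size=n_particles)           # sample the prior
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Importance weights for moving from temperature b_prev to b.
        log_w = (b - b_prev) * (log_target(x) - log_prior(x))
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Multinomial resampling (the step whose bias is controlled in the analysis).
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
        # One Metropolis random-walk move per particle; only local mixing is needed.
        prop = x + rng.normal(0.0, 0.5, size=n_particles)
        log_acc = (b * (log_target(prop) - log_target(x))
                   + (1 - b) * (log_prior(prop) - log_prior(x)))
        accept = np.log(rng.uniform(size=n_particles)) < log_acc
        x = np.where(accept, prop, x)
    return x

# Bimodal example: an equal mixture of two well-separated Gaussians.
log_target = lambda x: np.logaddexp(-0.5 * (x - 5) ** 2, -0.5 * (x + 5) ** 2)
samples = smc_tempered(log_target)
```

A single random-walk Markov chain on the same target would take exponentially long to cross between the two modes, which is the regime where the SMC bounds are most informative.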
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both a theoretical and computational standpoint. We disentangle this complexity into two factors: (1)~the approach to removing feature information, and (2)~the tractable estimation strategy. These two factors provide a natural lens through which we can better understand and compare 24 distinct algorithms. Based on the various feature removal approaches, we describe the multiple types of Shapley value feature attributions and the methods to calculate each one. Then, based on the tractable estimation strategies, we characterize two distinct families of approaches: model-agnostic and model-specific approximations. For the model-agnostic approximations, we benchmark a wide class of estimation approaches and tie them to alternative yet equivalent characterizations of the Shapley value. For the model-specific approximations, we clarify the assumptions crucial to each method's tractability for linear, tree, and deep models. Finally, we identify gaps in the literature and promising future research directions.
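As one concrete example of a model-agnostic estimation strategy of the kind surveyed here, a permutation-sampling Monte Carlo estimator of Shapley values can be sketched as follows. Replacing "removed" features with a fixed baseline is only one of the feature-removal approaches compared in the paper, and the function name and defaults are illustrative.

```python
import numpy as np

def shapley_permutation_sampling(model, x, baseline, n_perms=200, rng=None):
    """Monte Carlo Shapley estimate for one input x.
    `model` maps a 2D array of inputs to a 1D array of predictions;
    removed features are represented by the corresponding baseline values."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perms):
        perm = rng.permutation(d)
        current = baseline.copy()
        prev_value = model(current[None, :])[0]
        for j in perm:
            current[j] = x[j]                    # add feature j to the coalition
            value = model(current[None, :])[0]
            phi[j] += value - prev_value         # marginal contribution of feature j
            prev_value = value
    return phi / n_perms

# Usage sketch, e.g. with a fitted scikit-learn regressor `reg` and data matrix X:
#   phi = shapley_permutation_sampling(reg.predict, X[0], X.mean(axis=0))
```

Model-specific approximations (for linear, tree, or deep models) avoid this sampling loop by exploiting the model's structure.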
Recent research on the Internet of Things (IoT) has been widely applied in industrial practice, fostering exponential growth of data and connected devices. Various parties can thereafter access data-driven AI models through certain data-sharing policies. However, most current training procedures rely on a centralized data-collection strategy and a single computing server, and such a centralized scheme can lead to many problems. Client data stored in a centralized database may be tampered with, so the provenance and authenticity of the data cannot be guaranteed. Once such security issues arise, the trustworthiness of the trained AI model becomes questionable, and the model may even produce harmful results at the testing stage. Recently, blockchain and AI, two core technologies of Industry 4.0 and Web 3.0, have been explored to facilitate decentralized AI training strategies. To this end, we propose a new system architecture called APPFLChain, an integrated architecture of a Hyperledger Fabric-based blockchain and a federated learning paradigm. Our proposed system allows different parties to jointly train AI models, with their clients or stakeholders connected by a consortium blockchain-based network. Because users do not need to share sensitive personal information with the server, our system can maintain a high degree of security and privacy. For numerical evaluation, we simulate a real-world scenario to illustrate the whole operational process of APPFLChain. Simulation results show that, by taking advantage of the characteristics of consortium blockchain and federated learning, APPFLChain exhibits desirable properties including immutability, traceability, privacy protection, and reliable decision-making.
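To make the federated-learning half of the architecture concrete, a FedAvg-style aggregation step might look like the sketch below. This is a generic illustration under assumed data structures, not APPFLChain's implementation, and the step of committing the aggregate (or its hash) to the Hyperledger Fabric ledger is not shown.

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg aggregation).

    client_weights: list over clients, each a list of per-layer numpy arrays.
    client_sizes:   list of the number of local training samples per client.
    """
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]
```

Each party trains locally on its own data and only the resulting parameters are aggregated, which is what keeps sensitive client data off the central server.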
With the improvement of AI chips (e.g., GPUs, TPUs, and NPUs) and the rapid development of the Internet of Things (IoT), some powerful deep neural networks (DNNs) are composed of millions or even hundreds of millions of parameters, which may make them unsuitable for direct deployment on low-compute, low-capacity units such as edge devices. Recently, knowledge distillation (KD) has been recognized as one of the effective methods of model compression for reducing model parameters. The main concept of KD is to extract useful information from the feature maps of a large model (i.e., the teacher model) to guide the training of a small model (i.e., the student model), whose size is much smaller than that of the teacher. Although many KD-based methods have been proposed to utilize the information in the feature maps of the teacher model's intermediate layers, most of them do not consider the similarity between the feature maps of the teacher and student models, which may lead the student model to learn useless information. Inspired by the attention mechanism, we propose a novel KD method called representative teacher keys (RTK), which not only considers the similarity of the feature maps but also filters out useless information to improve the performance of the target student model. In our experiments, we validate the proposed method with multiple backbone networks (e.g., ResNet and WideResNet) and datasets (e.g., CIFAR10, CIFAR100, SVHN, and CINIC10). The results show that our proposed RTK can effectively improve the classification accuracy of attention-based KD methods.
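For context, attention-based KD methods of the kind RTK builds on typically match normalized spatial attention maps between paired teacher and student feature maps. The sketch below shows such a generic attention-transfer-style loss; RTK's representative-key selection and similarity-based filtering of feature maps are not reproduced here, and matching spatial sizes are assumed.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Spatial attention map: channel-wise sum of squared activations of a
    # (N, C, H, W) feature map, flattened and L2-normalized per sample.
    a = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(a, dim=1)

def attention_kd_loss(student_feats, teacher_feats):
    # Mean squared distance between normalized attention maps of paired
    # student/teacher feature maps taken from intermediate layers.
    return sum(
        (attention_map(s) - attention_map(t)).pow(2).mean()
        for s, t in zip(student_feats, teacher_feats)
    )
```

This term is added to the usual task loss during student training; filtering which teacher feature maps are worth matching is where RTK differs from the generic formulation.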